Automatically fixing software bugs is a challenging task. While recent work showed that natural language context is useful in guiding bug-fixing models, the approach required prompting developers to provide this context, which was simulated through commit messages written after the bug-fixing code changes were made. We instead propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for any additional information from developers. For this, we augment standard bug-fixing datasets with bug report discussions. Using these newly compiled datasets, we demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
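As a minimal sketch of how such naturally occurring context might be supplied to a sequence-to-sequence repair model (the separator tokens, truncation policy, and function name below are illustrative assumptions, not the paper's preprocessing):

```python
# Hypothetical sketch: prepend bug report discussion text to the buggy code
# before feeding it to a seq2seq repair model. Token names and the truncation
# policy are illustrative assumptions, not the paper's actual pipeline.

def build_model_input(buggy_code: str, discussion_turns: list[str],
                      max_context_chars: int = 2000) -> str:
    """Concatenate natural-language context with the buggy code."""
    context = " ".join(discussion_turns)[:max_context_chars]  # crude truncation
    return f"<context> {context} <code> {buggy_code}"

example = build_model_input(
    buggy_code="if (i <= items.length) { process(items[i]); }",
    discussion_turns=[
        "Crash with ArrayIndexOutOfBoundsException on the last element.",
        "Looks like an off-by-one in the loop bound.",
    ],
)
print(example)  # the string a repair model would be trained/evaluated on
```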
For the majority of the machine learning community, the expensive nature of collecting high-quality human-annotated data and the inability to efficiently finetune very large state-of-the-art pretrained models on limited compute are major bottlenecks for building models for new tasks. We propose a zero-shot simple approach for one such task, Video Moment Retrieval (VMR), that does not perform any additional finetuning and simply repurposes off-the-shelf models trained on other tasks. Our three-step approach consists of moment proposal, moment-query matching and postprocessing, all using only off-the-shelf models. On the QVHighlights benchmark for VMR, we vastly improve performance of previous zero-shot approaches by at least 2.5x on all metrics and reduce the gap between zero-shot and state-of-the-art supervised by over 74%. Further, we also show that our zero-shot approach beats non-pretrained supervised models on the Recall metrics and comes very close on mAP metrics; and that it also performs better than the best pretrained supervised model on shorter moments. Finally, we ablate and analyze our results and propose interesting future directions.
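A minimal sketch of the three-step pipeline is shown below: propose candidate moments, score them against the query with an off-the-shelf joint text/video embedding, then postprocess. The embedding features and helper names are stand-ins (assumptions), not the specific off-the-shelf models used in the paper.

```python
# Sketch of proposal -> matching -> postprocessing for zero-shot VMR.
import numpy as np

def propose_moments(num_frames: int, window: int = 16, stride: int = 8):
    """Sliding-window moment proposals over frame indices."""
    return [(s, min(s + window, num_frames)) for s in range(0, num_frames, stride)]

def score_moments(frame_feats: np.ndarray, query_feat: np.ndarray, proposals):
    """Cosine similarity between mean-pooled moment features and the query."""
    scores = []
    for start, end in proposals:
        m = frame_feats[start:end].mean(axis=0)
        scores.append(m @ query_feat / (np.linalg.norm(m) * np.linalg.norm(query_feat)))
    return np.array(scores)

def postprocess(proposals, scores, top_k: int = 5):
    """Keep the top-k highest-scoring moments (a crude stand-in for NMS-style cleanup)."""
    order = np.argsort(-scores)
    return [proposals[i] for i in order[:top_k]]

# Toy usage with random features standing in for off-the-shelf embeddings.
frame_feats = np.random.randn(128, 512)   # per-frame video features
query_feat = np.random.randn(512)         # query sentence embedding
props = propose_moments(num_frames=128)
print(postprocess(props, score_moments(frame_feats, query_feat, props)))
```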
Characterizing the patterns of errors that a system makes helps researchers focus future development on increasing its accuracy and robustness. We propose a novel form of "meta learning" that automatically learns interpretable rules characterizing the types of errors that a system makes, and demonstrate these rules' ability to help understand and improve two NLP systems. Our approach works by collecting error cases on validation data, extracting meta-features describing these samples, and finally learning rules that characterize errors using these features. We apply our approach to ViLBERT, for Visual Question Answering, and RoBERTa, for Commonsense Question Answering. Our system learns interpretable rules that provide insights into the systemic errors these systems make on the given tasks. Using these insights, we are also able to "close the loop" and modestly improve performance of these systems.
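An illustrative sketch of this loop, assuming a shallow decision tree as the interpretable rule learner and made-up meta-features (neither is necessarily what the paper uses):

```python
# Label validation samples as error/correct, describe them with meta-features,
# and fit an interpretable rule learner whose rules can be printed and inspected.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Meta-features per validation sample (e.g., question length, answer type, ...)
meta_features = np.array([
    [12, 1], [30, 0], [8, 1], [41, 0], [25, 1], [33, 0],
])
feature_names = ["question_length", "answer_is_yes_no"]
is_error = np.array([0, 1, 0, 1, 0, 1])  # 1 = the system answered incorrectly

tree = DecisionTreeClassifier(max_depth=2).fit(meta_features, is_error)
print(export_text(tree, feature_names=feature_names))  # human-readable rules
```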
Flow analysis with phase-contrast cardiac magnetic resonance imaging (PC-CMR) enables quantification of important parameters for the assessment of cardiovascular function. An essential part of this analysis is the identification of the correct CMR views and quality control (QC) to detect artefacts that could affect flow quantification. We propose a novel deep-learning-based framework for fully automated flow analysis of complete CMR scans, which first carries out these view selection and QC steps using two sequential convolutional neural networks, followed by automatic aorta and pulmonary artery segmentation to enable quantification of key flow parameters. Accuracy values of 0.958 and 0.914 were obtained for view classification and QC, respectively. For segmentation, Dice scores were $>$0.969, and Bland-Altman plots indicated high agreement between manual and automated peak flow values. In addition, we tested the pipeline on an external validation dataset, and the results demonstrated its robustness. This work was carried out using multi-vendor clinical data consisting of 986 cases, indicating the potential for using this pipeline in a clinical setting.
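A hypothetical sketch of the final quantification step: once the vessel has been segmented on each time frame, flow per frame is the sum of velocities inside the mask times the pixel area, and peak flow is the maximum over the cardiac cycle. Array shapes, units, and the toy data are illustrative assumptions, not the framework's implementation.

```python
import numpy as np

def flow_curve(velocity, masks, pixel_area_cm2):
    """velocity: (T, H, W) in cm/s; masks: (T, H, W) boolean vessel masks."""
    return np.array([
        (velocity[t][masks[t]] * pixel_area_cm2).sum()  # mL/s for frame t
        for t in range(velocity.shape[0])
    ])

velocity = np.random.uniform(0, 120, size=(30, 64, 64))     # toy velocity maps
masks = np.zeros((30, 64, 64), dtype=bool)
masks[:, 28:36, 28:36] = True                                # toy vessel region
curve = flow_curve(velocity, masks, pixel_area_cm2=0.02)
print("peak flow (mL/s):", curve.max())
```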
手工和小规模的黄金开采(ASGM)是许多家庭的重要收入来源,但它可以产生巨大的社会和环境影响,尤其是在发展中国家的雨林中。Sentinel-2卫星收集了多光谱图像,可用于检测水位和质量的变化,这表明采矿地点位置。这项工作着重于对秘鲁亚马逊雨林中ASGM活动的认可。我们根据支持向量机(SVM)测试了几个半监督分类器,以检测Madre de Dios地区从2019年到2021年的水体变化,这是ASGM活动的全球热点之一。实验表明,基于SVM的模型可以实现RGB的合理性能(使用Cohen的$ \ kappa $ 0.49)和6通道图像(使用Cohen的$ \ kappa $ 0.71),具有非常有限的注释。还分析了合并实验室色彩空间的功效。
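A minimal sketch of the per-pixel classification setup: fit an SVM on pixel feature vectors (3 channels for RGB, 6 for the extended band stack) and score with Cohen's kappa. The data below is synthetic and a plain supervised SVC stands in for the semi-supervised variants; band choices and preprocessing are assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                   # 6-channel pixel features
y = (X[:, 3] + 0.5 * X[:, 5] > 0).astype(int)    # toy "water changed" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("Cohen's kappa:", cohen_kappa_score(y_te, clf.predict(X_te)))
```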
Introduction: Artificial intelligence (AI) has the potential to facilitate the automation of CMR analysis for biomarker extraction. However, most AI algorithms are trained on a specific input domain (e.g., a single scanner vendor or hospital-specific imaging protocol) and lack the robustness to perform optimally when applied to CMR data from other input domains. Methods: Our proposed framework consists of an AI-based algorithm for biventricular segmentation of short-axis images, followed by post-analysis quality control to detect erroneous results. The segmentation algorithm was trained on a large clinical dataset of CMR scans from two NHS hospitals (n = 2793) and validated on further cases from this dataset (n = 441) and on five external datasets (n = 6808). The validation data included CMR scans of patients with a range of diseases, acquired at 12 different centres using CMR scanners from all major vendors. Results: Our method yielded median Dice scores above 87%, translating into median absolute errors in cardiac biomarkers within the range of inter-observer variability: <8.4 mL (left ventricle volume), <9.2 mL (right ventricle volume), <13.3 g (left ventricular mass), and <5.9% (ejection fraction) across all datasets. Stratification of cases by cardiac disease phenotype and scanner vendor showed good agreement. Conclusions: We show that our proposed tool, which combines a state-of-the-art AI algorithm trained on a large-scale multi-domain CMR dataset with post-analysis quality control, enables robust analysis of clinical data from multiple centres, vendors, and cardiac diseases. This is a fundamental step for the clinical translation of AI algorithms. Moreover, our method yields a range of additional biomarkers of cardiac function (filling and ejection rates, regional wall motion, and strain) at no extra computational cost.
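An illustrative sketch of the biomarker-extraction step: ventricular volume is the labelled voxel count times voxel volume, ejection fraction follows from end-diastolic and end-systolic volumes, and a simple plausibility check mimics the idea of post-analysis quality control. Label values, voxel spacing, and the QC rule are assumptions, not the framework's actual implementation.

```python
import numpy as np

LV_LABEL = 1  # assumed label value for the left-ventricular blood pool

def volume_ml(seg: np.ndarray, voxel_vol_mm3: float, label: int) -> float:
    return (seg == label).sum() * voxel_vol_mm3 / 1000.0  # mm^3 -> mL

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Toy end-diastolic / end-systolic short-axis segmentations, shape (Z, H, W)
ed_seg = np.zeros((10, 64, 64), dtype=np.uint8)
ed_seg[2:8, 20:44, 20:44] = LV_LABEL
es_seg = np.zeros((10, 64, 64), dtype=np.uint8)
es_seg[3:7, 24:40, 24:40] = LV_LABEL

voxel_vol = 1.8 * 1.8 * 8.0                       # assumed voxel size in mm
edv = volume_ml(ed_seg, voxel_vol, LV_LABEL)
esv = volume_ml(es_seg, voxel_vol, LV_LABEL)
ef = ejection_fraction(edv, esv)
if not (10 <= ef <= 90):                          # crude post-analysis QC check
    print("QC flag: implausible ejection fraction")
print(f"EDV={edv:.1f} mL, ESV={esv:.1f} mL, EF={ef:.1f}%")
```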
Although electronic health records are a rich source of data for biomedical research, these systems are not implemented uniformly across healthcare settings, and large amounts of data may be missing owing to healthcare fragmentation and the lack of interoperability between siloed electronic health record systems. Considering that the deletion of cases with missing data may introduce severe bias into subsequent analyses, several authors prefer multiple imputation (MI) strategies to recover the missing information. Unfortunately, although several works in the literature have reported promising results using any of the different multiple imputation algorithms now freely available for research, there is no consensus on which MI algorithm works best. Beyond the choice of MI strategy, the selection of the imputation algorithm and its application settings is also critical and challenging. In this paper, inspired by the seminal works of Rubin and van Buuren, we propose a methodological framework that can be applied to evaluate and compare multiple imputation techniques, with the aim of choosing the most valid one for computing inferences in a clinical research study. Our framework has been applied to validate, and extend on a larger cohort, the results we presented in a previous literature study, in which we evaluated the influence of crucial patient descriptors and the impact of COVID-19 in patients with type 2 diabetes mellitus, whose data are provided by the National COVID Cohort Collaborative enclave.
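A simplified sketch of the comparison idea: induce missingness, apply several imputation strategies, and compare a downstream estimate (here a regression coefficient) against the complete-data value. The sklearn imputers are single-imputation stand-ins for fully fledged MI procedures such as MICE, and the data and evaluation metric are illustrative assumptions, not the proposed framework itself.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500)
true_coef = LinearRegression().fit(X, y).coef_     # complete-data reference

X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan         # 20% values missing at random

for name, imputer in [("mean", SimpleImputer()),
                      ("knn", KNNImputer(n_neighbors=5)),
                      ("iterative", IterativeImputer(random_state=0))]:
    coef = LinearRegression().fit(imputer.fit_transform(X_miss), y).coef_
    print(name, "max coefficient error:", np.abs(coef - true_coef).max())
```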
Purpose: (1) To develop a deep learning algorithm to identify the major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans; (2) to exploit this information to robustly differentiate between healthy, optic disc drusen (ODD), and papilledema ONHs. This was a cross-sectional comparative study including ONHs with confirmed ODD, papilledema due to high intracranial pressure (51 eyes), and healthy controls (100 eyes). 3D scans of the ONHs were acquired using OCT and then processed to improve deep-tissue visibility. At first, a deep learning algorithm was developed using 984 B-scans (from 130 eyes) in order to identify the major neural/connective tissues and ODD regions. The performance of our algorithm was assessed using the Dice coefficient (DC). In a second step, a classification algorithm (random forest) was designed using 150 OCT volumes to perform 3-class classification (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelaminar swelling scores (derived from the segmentations). To assess performance, we reported the area under the receiver operating characteristic curve (AUC) for each class. Our segmentation algorithm was able to isolate neural and connective tissues and ODD regions whenever present. This was confirmed by an average DC of 0.93 $\pm$ 0.03 on the test set, corresponding to good performance. Classification was achieved with high AUCs, i.e. 0.99 $\pm$ 0.01 for the detection of ODD, 0.99 $\pm$ 0.01 for the detection of papilledema, and 0.98 $\pm$ 0.02 for the detection of healthy ONHs. Our AI approach can accurately discriminate ODD from papilledema using a single OCT scan. Our classification performance was excellent, but will need to be validated in a much larger population. Our approach may have the potential to establish OCT as the mainstay of diagnostic imaging in neuro-ophthalmology.
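A sketch of the second-step classifier: a random forest that predicts ODD / papilledema / healthy from two per-volume scores (a drusen score and a prelaminar swelling score derived from the segmentation), evaluated with one-vs-rest AUCs. The synthetic scores and their class-dependent structure below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 150
labels = rng.integers(0, 3, size=n)               # 0: ODD, 1: papilledema, 2: healthy
drusen_score = np.where(labels == 0, 0.8, 0.1) + rng.normal(0, 0.1, n)
swelling_score = np.where(labels == 1, 0.7, 0.15) + rng.normal(0, 0.1, n)
X = np.column_stack([drusen_score, swelling_score])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)
for c, name in enumerate(["ODD", "papilledema", "healthy"]):
    print(name, "AUC:", roc_auc_score((y_te == c).astype(int), proba[:, c]))
```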
A machine learning (ML) model for predicting product state distributions from specific initial states (state-to-distribution, or STD) is presented and quantitatively tested for the N($^4$S) + O$_2$(X$^3\Sigma_{\rm g}^{-}$) $\rightarrow$ NO(X$^2\Pi$) + O($^3$P) reaction. The reference dataset used to train the neural network (NN) consists of final-state distributions determined from explicit quasi-classical trajectory (QCT) simulations for $\sim 2000$ initial conditions. Overall, the prediction accuracy, as quantified by the root-mean-squared difference ($\sim 0.003$) and the $R^2$ ($\sim 0.99$) between the reference QCT results and the predictions of the STD model, is high for the test set, for off-grid state-specific initial conditions, and for initial conditions drawn from reactive state distributions characterized by translational, rotational, and vibrational temperatures. Compared with a more coarse-grained distribution-to-distribution (DTD) model evaluated on the same initial state distributions, the STD model shows comparable performance with the additional benefit of state resolution in the reactant preparation. Starting from specific initial states also leads to more diverse final-state distributions, which requires a more expressive neural network than for DTD. Direct comparison between explicit QCT simulations, the STD model, and the widely used Larsen-Borgnakke (LB) model shows that the STD model is quantitative, whereas the LB model is at best qualitative for the rotational distributions $P(j')$ and fails for the vibrational distributions $P(v')$. The STD model is therefore well suited for simulating nonequilibrium high-speed flows, e.g., using the direct simulation Monte Carlo method.
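A conceptual sketch of the STD idea: a neural network maps a state-specific initial condition (v, j, relative translational energy) to a discretized product-state distribution, trained on distributions obtained from QCT simulations. The synthetic targets, input ranges, and network size below are illustrative assumptions, not the reference dataset or architecture of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_bins = 2000, 20
X = np.column_stack([
    rng.integers(0, 10, n_samples),        # initial vibrational quantum number v
    rng.integers(0, 50, n_samples),        # initial rotational quantum number j
    rng.uniform(0.5, 5.0, n_samples),      # relative translational energy (eV)
])
# Toy "final-state distributions": a Gaussian whose peak shifts with the inputs.
centers = (X[:, 0] + 0.1 * X[:, 1] + 2 * X[:, 2])[:, None]
bins = np.arange(n_bins)[None, :]
Y = np.exp(-0.5 * ((bins - centers) / 2.0) ** 2)
Y /= Y.sum(axis=1, keepdims=True)          # each target is a normalized P(v')

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, Y)
pred = np.clip(model.predict(X[:1]), 0, None)
print("predicted distribution (renormalized):", pred / pred.sum())
```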
Objective: Traumatic brain injury can be caused by head impacts, but many brain injury risk estimation models are not equally accurate across the variety of impacts that patients may undergo, and the characteristics of different types of impacts are not well studied. We investigated the spectral characteristics of different head impact types with kinematics classification. Methods: Data were analyzed from 3,262 head impacts from lab reconstruction, American football, mixed martial arts, and publicly available car crash data. A random forest classifier with spectral densities of linear acceleration and angular velocity was built to classify head impact types (e.g., football, car crash, mixed martial arts). To test the classifier's robustness, another 271 lab-reconstructed impacts were obtained from 5 other instrumented mouthguards. Finally, with the classifier, type-specific, nearest-neighbor regression models were built for brain strain. Results: The classifier reached a median accuracy of 96% over 1,000 random partitions of training and test sets. The most important features in the classification included both low-frequency and high-frequency features, drawn from both linear acceleration and angular velocity. Different head impact types had different distributions of spectral densities in the low-frequency and high-frequency ranges (e.g., the spectral densities of MMA impacts were higher in the high-frequency range than in the low-frequency range). The type-specific regression showed a generally higher R^2 value than baseline models without classification. Conclusion: The machine-learning-based classifier enables a better understanding of the impact kinematics spectral density in different sports, and it can be applied to evaluate the quality of impact-simulation systems and on-field data augmentation.
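A schematic of the feature pipeline: compute power spectral densities of the linear-acceleration and angular-velocity traces with Welch's method and use them as features for a random forest impact-type classifier. The synthetic signals, sampling rate, and two-class label structure below are illustrative assumptions, not the study's instrumented-mouthguard data.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 1000  # Hz, assumed kinematics sampling rate

def impact_features(lin_acc, ang_vel):
    """Concatenate PSDs of a linear-acceleration and an angular-velocity trace."""
    _, psd_a = welch(lin_acc, fs=fs, nperseg=128)
    _, psd_w = welch(ang_vel, fs=fs, nperseg=128)
    return np.concatenate([psd_a, psd_w])

# Toy dataset: two impact "types" that differ in dominant frequency content.
X, y = [], []
for label, freq in [(0, 30.0), (1, 120.0)]:
    for _ in range(100):
        t = np.arange(256) / fs
        lin_acc = np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)
        ang_vel = np.cos(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)
        X.append(impact_features(lin_acc, ang_vel))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```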